TECHnalysis Research Blog

August 1, 2017
Smarter Computing

By Bob O'Donnell

Work smarter, not harder. That’s the phrase people like to use when talking about how working more efficiently can often bring a greater reward.

It’s also starting to become particularly appropriate for some of the latest advances in semiconductor chip design and artificial intelligence-based software. For many years, much of the effort in silicon computing was focused on cramming more transistors, running at faster speeds, into the same basic architectures. So CPUs, for example, became bigger and faster, but they were still fundamentally CPUs. Many of the software advancements, in turn, were accomplished by running some of the same basic algorithms and program elements faster.

Several recent announcements from AMD and nVidia, as well as ongoing work by Qualcomm, Intel and others, however, highlight how radically those rules have changed. From new types of chip designs to different combinations of chip elements, along with clever new software tools and methodologies that take better advantage of these architectures, we’re on the cusp of a whole new range of radically smarter silicon that will start enabling the science fiction-like applications we’ve so far only glimpsed.

From photorealistic augmented and virtual reality experiences, to truly intelligent assistants and robots, these new hardware chip designs and software efforts are closer to making the impossible seem a lot more possible.

Part of the reason for this is basic physics. While we can argue about whether the Moore’s Law-inspired performance improvements that have given the semiconductor industry a staggering degree of advancement over the last 50 years can continue, there is no denying that clock speeds for CPUs, GPUs and other key types of chips stalled out several years ago. As a result, semiconductor professionals have started to tackle the problem of moving performance forward in very different ways.

In addition, we’ve started to see a much wider array of tasks, or workloads, that today’s semiconductors are being asked to perform. Image recognition, ray tracing, 4K and 8K video editing, highly demanding games, and artificial intelligence-based work are all making it clear that these new kinds of chip design efforts are going to be essential to meet the smarter computing needs of the future.

Specifically, we’ve seen a tremendous rise in interest, awareness, and development around new chip architectures. GPUs have led the charge here, but we’re also seeing FPGAs (field programmable gate arrays)—such as those from the Altera division of Intel—and dedicated AI chips from the likes of Intel’s new Nervana division, as well as chip newcomers Google and Microsoft, start to establish a strong presence.

We’re also seeing interesting new designs within more traditional chip architectures. AMD’s new high-end Threadripper desktop CPU leverages the company’s Epyc server design, combining multiple independent CPU dies linked over a high-speed Infinity Fabric interconnect to drive new levels of performance. This is a radically different approach from the traditional one of simply making individual CPU dies bigger and faster. In the future, we could also see different types of semiconductor components (even from companies other than AMD) integrated into a single package, all connected over this Infinity Fabric.

This notion of multiple computing parts working together as a heterogeneous whole is seeing many types of iterations. Qualcomm’s work on its Snapdragon SOCs over the last several years, for example, has focused on combining CPUs, GPUs, DSPs (digital signal processors) and other unique hardware “chunks” into a coherent whole. Just last week, the company added a new AI software development kit (SDK) that intelligently assigns different types of AI workloads to different components of a Snapdragon—all in an effort to deliver the best possible performance.
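
To make that idea concrete, here’s a minimal sketch, in Python, of what routing AI workloads to different SOC blocks might look like. The names and routing rules below are illustrative assumptions for this column, not Qualcomm’s actual SDK or its API.

# Hypothetical heterogeneous-dispatch sketch; ComputeUnit and pick_unit
# are invented for illustration, not part of any real Snapdragon SDK.
from enum import Enum

class ComputeUnit(Enum):
    CPU = "CPU"   # general-purpose control flow and small models
    GPU = "GPU"   # large, highly parallel matrix math
    DSP = "DSP"   # low-power, always-on signal processing

def pick_unit(workload: str, batch_size: int, low_power: bool) -> ComputeUnit:
    """Route an AI workload to the SOC block best suited to run it."""
    if low_power:
        return ComputeUnit.DSP        # e.g., always-on keyword spotting
    if workload == "image_recognition" and batch_size > 1:
        return ComputeUnit.GPU        # throughput-bound convolutions
    return ComputeUnit.CPU            # sensible fallback for everything else

print(pick_unit("image_recognition", batch_size=8, low_power=False))
print(pick_unit("keyword_spotting", batch_size=1, low_power=True))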

Yet another variation comes from attaching high-end, power-demanding external GPUs (or other components) to notebooks via the Thunderbolt 3 standard. Apple showed this with an AMD-based external graphics card at its last event, and this week at the SIGGRAPH computer graphics conference, nVidia introduced two entries of its own to the eGPU market.
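
One practical consequence is that, from software’s point of view, an external GPU is just one more device to enumerate and select. Here’s a minimal sketch using PyTorch’s CUDA APIs, assuming an nVidia-based eGPU is attached (an AMD card like the one Apple showed would go through a different driver stack):

# List every visible GPU and prefer the one with the most memory,
# which is often the external card; fall back to CPU if none is found.
import torch

if torch.cuda.is_available():
    for idx in range(torch.cuda.device_count()):
        props = torch.cuda.get_device_properties(idx)
        print(f"GPU {idx}: {props.name}, {props.total_memory // 2**20} MiB")
    best = max(range(torch.cuda.device_count()),
               key=lambda i: torch.cuda.get_device_properties(i).total_memory)
    device = torch.device(f"cuda:{best}")
else:
    device = torch.device("cpu")

x = torch.randn(1024, 1024, device=device)  # this math now runs on the chosen device
y = x @ x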

The developments also go beyond hardware. While many people are (justifiably) getting tired of hearing about how seemingly everything is being enhanced with AI, nVidia showed a compelling demo at its SIGGRAPH press conference in which the highly compute-intensive task of ray-tracing a complex image was sped up tremendously by leveraging an AI-created improvement in rendering. Essentially, nVidia used GPUs to “train” a neural network to ray-trace certain types of images, then converted that “knowledge” into algorithms that different GPUs can use to redraw and move around very complex images very quickly. It was a classic demonstration of how the brute-force advancements we’ve traditionally seen in GPUs (or CPUs) can be surpassed with smarter ways of using those tools.
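
For a sense of that “train once, render fast” pattern, here’s a toy sketch in PyTorch: a small convolutional network learns to map cheap, noisy frames to clean ones. The random tensors stand in for real renders; this is an illustration of the concept, not nVidia’s actual technique or code.

# Toy denoiser: learn a mapping from noisy (cheap) frames to clean (expensive) ones.
import torch
import torch.nn as nn

denoiser = nn.Sequential(                      # tiny convolutional denoiser
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 3, 3, padding=1),
)
opt = torch.optim.Adam(denoiser.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for step in range(100):                        # "training" on synthetic pairs
    clean = torch.rand(4, 3, 64, 64)                # stand-in for converged renders
    noisy = clean + 0.1 * torch.randn_like(clean)   # stand-in for 1-sample renders
    opt.zero_grad()
    loss_fn(denoiser(noisy), clean).backward()
    opt.step()

# At render time, one cheap noisy pass plus one network pass replaces
# many additional ray samples per pixel.
with torch.no_grad():
    frame = denoiser(torch.rand(1, 3, 64, 64))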

After a period in which performance gains seemed to stall, the performance requirements of newer applications are becoming clear—and the amount of work that’s still needed to get there is becoming clearer still. The only way we can start to achieve these new performance levels is with the types of heterogeneous chip architecture designs and radically different software approaches that are starting to appear.

Though some of these advances have been discussed in theory for a while, it’s only now that they’ve begun to appear. Not only are we seeing important steps forward, but we are also beginning to see the fog lift around the future of these technologies and where the tech industry is headed. The image ahead is starting to look pretty good.

Here's a link to the column: https://techpinions.com/smarter-computing/50679

Bob O’Donnell is the president and chief analyst of TECHnalysis Research, LLC, a market research firm that provides strategic consulting and market research services to the technology industry and professional financial community. You can follow him on Twitter @bobodtech.
